Homework 2 solution

1.

% I didn't specify the image depth, so I won't take points off unless you used a different value for each channel.
img = zeros(2, 2, 3);
img(1, 1, 1) = 255;
img(1, 2, 2) = 255;
img(2, :, 3) = 255;
img
img =
img(:,:,1) =
   255     0
     0     0
img(:,:,2) =
     0   255
     0     0
img(:,:,3) =
     0     0
   255   255

2.

load ~/Dropbox/Research/Teaching/BMI_567_2018/Homework/Assignment2/homework_images.mat
% Check depth to see if I'll need to scale for viewing
min(bucky_images(:))
ans = 0
max(bucky_images(:))
ans = 63
% Already 6 bit depth, so no need to scale
mn_vec = zeros(1, 8);
sd_vec = zeros(1,8);
rng_vec = zeros(1,8);
for i = 1:8
    dat_loop = bucky_images(:,:,i);
    mn_vec(i) = mean(dat_loop(:));
    noise_bit = bucky_images(1:351, 1:60, i);
    sd_vec(i) = std(noise_bit(:));
    rng_vec(i) = max(dat_loop(:)) - min(dat_loop(:));
end
snr_vec = mn_vec./sd_vec;
cnr_vec = rng_vec./sd_vec;
[ord_snr, i_snr] = sort(snr_vec);
[ord_cnr, i_cnr] = sort(cnr_vec);
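For reference, the two ratios computed above are, with the noise standard deviation estimated from the (assumed signal-free) strip in columns 1 to 60,

```latex
\mathrm{SNR}_i = \frac{\bar{x}_i}{\hat{\sigma}_i}, \qquad
\mathrm{CNR}_i = \frac{\max_j x_{ij} - \min_j x_{ij}}{\hat{\sigma}_i},
```

where $\bar{x}_i$ is the mean over the whole $i$th image and $\hat{\sigma}_i$ is the standard deviation of its background strip.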
(a) Unordered SNRs
snr_vec
snr_vec =
4.2663 4.3273 4.3252 5.8845 4.2900 4.1937 4.3765 7.4237
(b)
figure
colormap(gray)
for i = 1:8
    subplot(2, 4, i)
    image(bucky_images(:,:, i_snr(i)))
end
(c) CNRs and images sorted by CNR
cnr_vec
cnr_vec =
8.5804 8.6272 9.6024 11.1182 9.0741 8.6253 8.7611 13.5765
figure
colormap(gray)
for i = 1:8
    subplot(2, 4, i)
    imagesc(bucky_images(:,:, i_cnr(i)))
end
To help understand why the orders differed, I looked at the numerators and denominators of SNR and CNR separately. I focused on the 3rd and 7th images, since the 7th had a high SNR but a low CNR, and the opposite was true for image 3. Since the image range is the same for all images, the CNR is completely determined by the standard deviation of the image.

One way to think about this is that each of SNR and CNR carries two pieces of information: the numerator and the denominator. Due to how CNR is calculated here (there are other definitions of CNR), the measure in the numerator isn't helpful, as it is constant across the images. On the other hand, both the mean and the standard deviation differ across images, so the SNR has more information to work with in this case. So, although image 7 has a fairly high standard deviation (7.1909), which makes its CNR low, its mean is on the higher end, so its SNR is higher. For image 3, the CNR only has the standard deviation to work with, and that standard deviation is pretty low, making the CNR larger. At the same time, image 3's mean is among the lowest of all the images, which brings its SNR down.

Generally, the CNR would be more useful if there were more contrast in the image and if the range of values differed across images. Since the range is the same across all of these images, there isn't much contrast, and CNR is based on essentially one piece of helpful information: the standard deviation. SNR, in this case, has two pieces of helpful information: the mean and the standard deviation. If, for some reason, the mean intensity were the same across all images but the range and standard deviation differed, CNR would probably work better than SNR.
i_snr
i_snr =
6 1 5 3 2 7 4 8
i_cnr
i_cnr =
1 6 2 7 5 3 4 8
mn_vec
mn_vec =
31.3244 31.6002 28.3768 33.3438 29.7846 30.6314 31.4708 34.4486
rng_vec
rng_vec =
63 63 63 63 63 63 63 63
sd_vec
sd_vec =
7.3423 7.3025 6.5609 5.6664 6.9428 7.3041 7.1909 4.6404

3.

load('~/Dropbox/Research/Teaching/BMI_567_2018/Homework/Assignment2/bucky_trio.mat')
min(bucky_trio(:))
ans = 0
max(bucky_trio(:))
ans = 315
(a) Not a 6 bit image, so I'll convert it to 6 bit depth so it displays well with image().
imin=min(min(bucky_trio));
bucky64=bucky_trio-imin;
imax= floor(max(max(bucky64)));
bucky64= floor(bucky64/imax*63);
figure
image(bucky64)
colormap(gray)
(b) Since I know the exact portion of the image I want to pull out for each panel of the plot below (the first, second, and third bucky), I can choose the mean and standard deviation for the sigmoid function systematically, by simply using the mean and sd of those portions of the image. If you guessed the values, I will deduct points.
I use the sigmoid, but it is okay if you used the window function. If you used the histogram to choose the values, note that the histogram of the full image is not informative; you should have isolated each part of the image separately.
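The sigmoid stretch used in the code below maps an intensity $x$ to

```latex
g(x) = I_{\max}\,\frac{1}{1 + e^{-(x - \mu)/\sigma}},
```

with $I_{\max} = \max(\texttt{bucky\_trio})$ and $\mu$, $\sigma$ the mean and standard deviation of the targeted portion of the image. Intensities near $\mu$ fall on the steep part of the curve, so that portion receives most of the output range.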
len_val = size(bucky_trio,2)/3;
b1 = bucky_trio(:,1:len_val);
m1 = mean(b1(:));
s1 = std(b1(:));
b2 = bucky_trio(:,(len_val+1):(2*len_val));
m2 = mean(b2(:));
s2 = std(b2(:));
b3 = bucky_trio(:,(2*len_val+1):(3*len_val));
m3 = mean(b3(:));
s3 = std(b3(:));
% Scaling by the image max; the results are rescaled to 6 bit depth for display below
first_bucky = max(bucky_trio(:))*1./(1+exp(-1*(bucky_trio - m1)/s1));
second_bucky = max(bucky_trio(:))*1./(1+exp(-1*(bucky_trio - m2)/s2));
third_bucky = max(bucky_trio(:))*1./(1+exp(-1*(bucky_trio - m3)/s3));
imin1=min(min(first_bucky));
first_bucky64=first_bucky-imin1;
imax1= floor(max(max(first_bucky64)));
first_bucky64= floor(first_bucky64/imax1*63);
imin2=min(min(second_bucky));
second_bucky64=second_bucky-imin2;
imax2= floor(max(max(second_bucky64)));
second_bucky64= floor(second_bucky64/imax2*63);
imin3=min(min(third_bucky));
third_bucky64=third_bucky-imin3;
imax3= floor(max(max(third_bucky64)));
third_bucky64= floor(third_bucky64/imax3*63);
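Each of the three blocks above applies the same display rescale used in part (a), mapping an image $x$ to 6-bit depth:

```latex
y = \left\lfloor 63 \cdot \frac{x - \min(x)}{\left\lfloor \max\bigl(x - \min(x)\bigr) \right\rfloor} \right\rfloor .
```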
figure
colormap(gray)
subplot(3,1,1)
image(first_bucky64)
subplot(3,1,2)
image(second_bucky64)
subplot(3,1,3)
image(third_bucky64)
(c) I've printed out the mean +/- sd for each portion below, as that indicates the rough range of values I focused on for each image. In the top panel, since I was focusing on the range of values in the first portion of the image, which are larger, most of the rest of the image had intensities below that range, making it black in the final result. The second bucky has a range that covers small values, so the transformed values for the first and third bucky end up very bright. In the last panel, the range of the third bucky was in the middle, so the bucky with the smaller values comes out dark and the one with the larger values comes out bright. Note that using these ranges with the window function would yield a similar result.
m1-s1
ans = 116.1672
m1+s1
ans = 233.4588
m2-s2
ans = 5.8084
m2+s2
ans = 11.6729
m3-s3
ans = 23.2334
m3+s3
ans = 46.6918

4.

img=double(imread('~/Dropbox/Research/Teaching/BMI_567_2018/PossibleDatasets/Eyes/images_for_project/distributed/img_029.ppm'));
Taking a quick look at each color channel.
figure
colormap(gray)
subplot(1, 3, 1)
image(floor(img(:,:,1)/255*63))
subplot(1, 3, 2)
image(floor(img(:,:,2)/255*63))
subplot(1, 3, 3)
image(floor(img(:,:,3)/255*63))
(a, b and c) Since the red and green channels show the most detail in the spots, and the green channel has the most contrast, I just used the green channel. In this case the spots are the brightest part of the image, so my window went from 150 to the max of the image (255). I decided to work with the original image here, not the scaled one: I only scale images in order to display them with image(). Otherwise, so I don't lose resolution, I work with the original values. Generally, you should do this too.
% Windowing
img2 = img(:, :, 2);
min_val = 150;
max_val = 255;
slope = 255/(max_val - min_val);
int = -1*min_val*255/(max_val - min_val);
rng = img2>min_val & img2<=max_val; % <= so pixels at the max (the brightest spots) aren't zeroed
img2_wind = 0*img2;
img2_wind(rng) = img2(rng)*slope + int;
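Written out, the window applied above is the linear map

```latex
w(x) = 255\,\frac{x - x_{\min}}{x_{\max} - x_{\min}}
```

for $x$ inside the window (here $x_{\min} = 150$, $x_{\max} = 255$), and $w(x) = 0$ outside it; the slope and int variables in the code are just this line's slope and intercept.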
figure
colormap(gray)
image(floor(img2_wind*63/255))

5.

(a) The following is the author's code.
fp=fopen('/Users/jeanettemumford/Dropbox/BMI_567_2017/AMIP_II_CD/LessonData/5_Filtering/SKULLBASE.DCM', 'r');
fseek(fp,1622,'bof');
img=zeros(512,512);
img(:)=fread(fp,(512*512),'short');
img=transpose(img);
fclose(fp);
diffimg=zeros(512,512);
for i = 1:511
    for j = 1:511
        diffimg(i,j) = -img(i,j) + img(i+1,j);
    end
end
minint=min(min(diffimg));
diffimg_scale=diffimg-minint;
maxint=max(max(diffimg_scale));
diffimg_scale=diffimg_scale/maxint*63;
Using conv2, I need to recall that the author didn't strictly follow the definition of convolution, and his kernel is in the y-direction instead of what he assumed above (x corresponds to the first dimension of the matrix and y to the second). I simply used flipud and fliplr to convert the operation done above into the kernel I would need for conv2.
ker_unflipped = zeros(3,3);
ker_unflipped(2,2) = -1;
ker_unflipped(3, 2) = 1;
ker = flipud(fliplr(ker_unflipped))
ker =
     0     1     0
     0    -1     0
     0     0     0
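The flip is needed because 2-D convolution is defined with reversed kernel indices:

```latex
(f * k)(i,j) = \sum_{m}\sum_{n} k(m,n)\, f(i-m,\, j-n),
```

so conv2 effectively applies the kernel rotated 180 degrees. Pre-flipping with flipud(fliplr(...)) makes conv2(img, ker, 'same') reproduce $-\mathrm{img}(i,j) + \mathrm{img}(i+1,j)$ at interior pixels.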
diffimg_conv = conv2(img, ker, 'same');
minint=min(min(diffimg_conv));
diffimg_conv_scale=diffimg_conv-minint;
maxint=max(max(diffimg_conv_scale));
diffimg_conv_scale=diffimg_conv_scale/maxint*63;
Aside from some slight differences near the edges, the results are the same. This is due to how conv2 handles edges: it zero pads, while the author's code simply set those values to 0. If you use the 'valid' option with conv2, it will omit the portions of the image that are impacted by the zero padding.
figure
colormap(gray)
subplot(1, 2, 1)
image(diffimg_scale)
title('Using the author code')
subplot(1,2,2)
image(diffimg_conv_scale)
title('Using conv2')
(b) Repeat in the y-direction
diffimgy=zeros(512,512);
for i = 1:511
    for j = 1:511
        diffimgy(i,j) = -img(i,j) + img(i,j+1);
    end
end
minint=min(min(diffimgy));
diffimgy_scale=diffimgy-minint;
maxint=max(max(diffimgy_scale));
diffimgy_scale=diffimgy_scale/maxint*63;
kery = zeros(3,3);
kery(2,2) = -1;
kery(2, 3) = 1;
kery = flipud(fliplr(kery))
kery =
     0     0     0
     1    -1     0
     0     0     0
diffimgy_conv = conv2(img, kery, 'same');
minint=min(min(diffimgy_conv));
diffimgy_conv_scale=diffimgy_conv-minint;
maxint=max(max(diffimgy_conv_scale));
diffimgy_conv_scale=diffimgy_conv_scale/maxint*63;
figure
colormap(gray)
subplot(1,2,1)
image(diffimgy_scale)
title('Author approach')
subplot(1,2,2)
image(diffimgy_conv_scale)
title('Convolution')
(c) The two kernels as well as the image of the length of the gradient are shown below.
k_xcentral = zeros(3,3);
k_xcentral(1,2) = .5;
k_xcentral(3,2)= -.5
k_xcentral =
         0    0.5000         0
         0         0         0
         0   -0.5000         0
k_ycentral = zeros(3,3);
k_ycentral(2,1) = .5;
k_ycentral(2,3)= -.5
k_ycentral =
         0         0         0
    0.5000         0   -0.5000
         0         0         0
len_grad = sqrt(conv2(img, k_xcentral, 'same').^2 + conv2(img, k_ycentral, 'same').^2);
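The two kernels implement the central differences, and len_grad is the gradient magnitude:

```latex
f_x(i,j) \approx \frac{f(i+1,j) - f(i-1,j)}{2}, \qquad
f_y(i,j) \approx \frac{f(i,j+1) - f(i,j-1)}{2}, \qquad
\lVert \nabla f \rVert = \sqrt{f_x^2 + f_y^2}.
```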
figure
colormap(gray)
image(len_grad*63/max(len_grad(:)))